[smoke][bugfix] moe_init_routing_v2 active_expert_range use int type #5521
Merged
wangxiyuan merged 1 commit into vllm-project:main on Dec 31, 2025
Conversation
Contributor
Code Review
This pull request addresses a bug where num_local_experts could be a torch.Tensor, causing a type error in the npu_moe_init_routing_v2 kernel, which expects an integer for its active_expert_range parameter. The fix handles this by checking whether num_local_experts is a tensor and extracting its value with .item(), or casting it to an integer otherwise. This change is correct and effectively resolves the issue.
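A minimal sketch of the coercion described in this review, assuming a standalone helper (the helper name and the call-site comment are illustrative, not the PR's exact diff):

```python
import torch

def _to_plain_int(num_local_experts):
    """Coerce num_local_experts to a Python int before kernel dispatch."""
    if isinstance(num_local_experts, torch.Tensor):
        # A 0-dim tensor carries a scalar; .item() extracts it as a Python number.
        return int(num_local_experts.item())
    return int(num_local_experts)

# npu_moe_init_routing_v2 expects plain ints for active_expert_range,
# e.g. active_expert_range=[0, _to_plain_int(num_local_experts)]
```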
Force-pushed from ceb69aa to 16211fe
Contributor
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Force-pushed from 16211fe to fa14124
Signed-off-by: shenchuxiaofugui <1311027364@qq.com>
Force-pushed from fa14124 to d2dca75
wangxiyuan approved these changes on Dec 31, 2025
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request on Dec 31, 2025
…to FIA_rebase * 'main' of https://github.com/vllm-project/vllm-ascend:
[feature] mooncake support pcp/dcp in common conditions (vllm-project#5224)
[Bugfix] Fix mm_merge (vllm-project#5249)
[Main2Main] Upgrade vllm commit to 1230 (vllm-project#5495)
[Feature] Refactor PCP &DCP related code (vllm-project#5214)
[main][test] Refactor the mtp and eagle test case (vllm-project#5326)
[smoke][bugfix] moe_init_routing_v2 active_expert_range use int type (vllm-project#5521)
[2/N] Upgrade nightly doc (vllm-project#5534)
[Doc] Add new contributors. (vllm-project#5537)
[3/N][Nightly] Move ops tests to nightly (vllm-project#5538)
wangyibo1005 pushed a commit to wangyibo1005/vllm-ascend that referenced this pull request on Dec 31, 2025
Rozwel-dx pushed a commit to Rozwel-dx/vllm-ascend that referenced this pull request on Jan 8, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request on Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request on Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request on Mar 4, 2026
What this PR does / why we need it?
The float kernel of moe_init_routing_v2 in the dispatch allgather operation does not support a tensor for active_expert_range; it only supports int.
In PR #5311, to unify the variables local_num_experts and self.local_num_experts, self.local_num_experts was used consistently, which led to the subsequent integer parameter being converted to a tensor type.
Does this PR introduce any user-facing change?
How was this patch tested?
| Benchmark   | Metric                       | Ground truth | Measured | Success |
|-------------|------------------------------|--------------|----------|---------|
| gsm8k       | exact_match,strict-match     | 0.89         | 0.8939   | ✅      |
| gsm8k       | exact_match,flexible-extract | 0.85         | 0.856    | ✅      |
| ceval-valid | acc,none                     | 0.84         | 0.8373   | ✅      |
Model Parameters:
{'pretrained': 'Qwen/Qwen3-30B-A3B', 'tensor_parallel_size': 2, 'dtype': 'auto', 'trust_remote_code': False, 'max_model_len': 4096, 'gpu_memory_utilization': 0.6, 'enable_expert_parallel': True}
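The metric names above (exact_match,strict-match; exact_match,flexible-extract; acc,none) follow lm-evaluation-harness conventions. Assuming that harness drove the run, a reproduction might look like the following sketch; the exact invocation used for this PR is not shown in the thread.

```python
import lm_eval

# Hypothetical reproduction of the accuracy check; model_args mirrors the
# "Model Parameters" dict reported above.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args={
        "pretrained": "Qwen/Qwen3-30B-A3B",
        "tensor_parallel_size": 2,
        "dtype": "auto",
        "trust_remote_code": False,
        "max_model_len": 4096,
        "gpu_memory_utilization": 0.6,
        "enable_expert_parallel": True,
    },
    tasks=["gsm8k", "ceval-valid"],
)
print(results["results"])  # per-task metrics, e.g. exact_match,strict-match
```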